[https://nvbugs/5498478][fix] Fix eagle3 fp8 kv target model + bf16 draft model + chunked prefill #7805
Conversation
Force-pushed from 4a85e76 to 87ec43c
📝 Walkthrough

A new boolean flag is_eagle3 is added and propagated through Python and C++ attention paths. It gates FP8-related scaling/paths in dispatch and initialization, adjusts parameter conversion, extends spec-decoding boolean params to four, updates planning/forward APIs, and augments tests to toggle FP8 target behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Module as Attention Module
    participant Wrapper as TrtllmAttentionWrapper
    participant THOP as THOP Binding
    participant CppOp as AttentionOp (C++)
    participant Dispatch as Fused QKV Dispatch
    User->>Module: forward(..., possibly is_eagle3)
    Module->>Wrapper: forward(..., is_eagle3=getattr(self,"is_eagle3", False))
    Wrapper->>Wrapper: plan(..., is_eagle3)
    Wrapper->>THOP: spec_decoding_bool_params[..., is_eagle3]
    THOP->>CppOp: mIsEagle3 = bools[3]
    CppOp->>Dispatch: enqueueGeneration(..., is_eagle3)
    Dispatch->>Dispatch: Gate FP8 scaling/paths if is_eagle3
    Dispatch-->>CppOp: results
    CppOp-->>Wrapper: attention output
    Wrapper-->>Module: output
    Module-->>User: output
    note over Dispatch,CppOp: When is_eagle3=true, skip static activation scaling,<br/>FP8 context FMHA scaling, and related scale settings.
```
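As a rough illustration of the walkthrough above, here is a minimal Python sketch of how a fourth spec-decoding boolean such as is_eagle3 could be packed on the Python side and read positionally on the C++ side as bools[3]. The names of the first three booleans are assumptions made for illustration only; is_eagle3 is the only flag introduced by this PR.

```python
# Hypothetical sketch: only is_eagle3 is the flag discussed in this PR;
# the other parameter names are illustrative placeholders.
from typing import List


def build_spec_decoding_bool_params(
    is_spec_decoding_enabled: bool,
    use_spec_decoding: bool,
    is_spec_dec_tree: bool,
    is_eagle3: bool = False,
) -> List[bool]:
    # The C++ binding would consume these positionally, e.g. mIsEagle3 = bools[3].
    return [is_spec_decoding_enabled, use_spec_decoding, is_spec_dec_tree, is_eagle3]


# Example: a draft-layer forward pass would pass is_eagle3=True.
spec_decoding_bool_params = build_spec_decoding_bool_params(True, True, False, is_eagle3=True)
```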
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
/bot run

PR_Github #18997 [ run ] triggered by Bot
@DylanChen-NV I am not quite sure I understand the fix here. If we disable FP8 context FMHA directly, does it mean chunked prefill won't work? I think the problem is that we should add clearer debug messages so users understand the chain: disabling chunked prefill -> paged context FMHA disabled -> FP8 context FMHA disabled. See https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/tensorrt_llm/thop/attentionOp.cpp#L606
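Purely as an illustration of the kind of debug-message chain suggested here (not the actual TRT-LLM logging code, and the flag names are placeholders), the gating could log each step explicitly:

```python
# Hypothetical sketch of explicit warnings for the disable chain:
# chunked prefill off -> paged context FMHA off -> FP8 context FMHA off.
import logging

logger = logging.getLogger("trtllm.attention")


def resolve_context_fmha_flags(use_chunked_prefill: bool,
                               use_paged_context_fmha: bool,
                               use_fp8_context_fmha: bool):
    if not use_chunked_prefill and use_paged_context_fmha:
        logger.warning("Chunked prefill is disabled; disabling paged context FMHA as well.")
        use_paged_context_fmha = False
    if not use_paged_context_fmha and use_fp8_context_fmha:
        logger.warning("Paged context FMHA is disabled; disabling FP8 context FMHA as well.")
        use_fp8_context_fmha = False
    return use_paged_context_fmha, use_fp8_context_fmha
```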
PR_Github #18997 [ run ] completed with state

/bot run

PR_Github #19010 [ run ] triggered by Bot

PR_Github #19010 [ run ] completed with state
```python
# Context MLA uses separate qkv instead of paged_context_fmha
use_paged_context_fmha = False

is_eagle3 = kwargs.get("is_eagle3", False)
```
Please add an explicit `is_eagle3: bool = False` parameter instead of using kwargs.
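A minimal sketch of the suggested change; the surrounding signature is simplified and hypothetical, not the real attention module API:

```python
import torch

# Before: the flag is hidden in **kwargs and easy to misspell silently.
def forward_with_kwargs(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, **kwargs):
    is_eagle3 = kwargs.get("is_eagle3", False)
    return q, k, v, is_eagle3

# After: the flag is an explicit, typed keyword argument with a default value.
def forward_explicit(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, *,
                     is_eagle3: bool = False, **kwargs):
    return q, k, v, is_eagle3
```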
```diff
 // FP8 FMHA should be used with fp8 workflow together.
-if (mFP8ContextFMHA || mFP8ContextMLA)
+if ((mFP8ContextFMHA || mFP8ContextMLA) && !mIsEagle3)
```
Do we really need mIsEagle3? Can we set mFP8ContextMLA/mPagedContextFMHA to false in cpp/tensorrt_llm/thop/attentionOp.cpp instead, so that the common attention op can stay unchanged?
Force-pushed from 87ec43c to 224a486
/bot run

PR_Github #20348 [ run ] triggered by Bot

PR_Github #20348 [ run ] completed with state
Force-pushed from 224a486 to a9d1c08
/bot run

PR_Github #20706 [ run ] triggered by Bot

PR_Github #20706 [ run ] completed with state

/bot run

PR_Github #20719 [ run ] triggered by Bot

PR_Github #20719 [ run ] completed with state
Force-pushed from a9d1c08 to ba293b3
/bot run

PR_Github #20845 [ run ] triggered by Bot

PR_Github #20953 [ run ] triggered by Bot

PR_Github #20953 [ run ] completed with state
Force-pushed from 5570d3b to a7276c8
/bot run

PR_Github #20979 [ run ] triggered by Bot

PR_Github #20979 [ run ] completed with state
Force-pushed from a7276c8 to c02bcc4
/bot run

PR_Github #21017 [ run ] triggered by Bot

PR_Github #21017 [ run ] completed with state
Force-pushed from c02bcc4 to b81a11a
/bot run

PR_Github #21142 [ run ] triggered by Bot

PR_Github #21142 [ run ] completed with state
```python
# elif fp8_fmha_for_eagle3:
elif self.has_fp8_kv_cache and not self.has_fp8_qdq and out_scale is not None:
    # Force to use FP8 FMHA for (eagle3 + FP8 target model + BF16/FP16 draft model) in draft layers
    out_dtype = torch.float8_e4m3fn
```
That said, this is not true for all cases. On Blackwell, the FP8 FMHA kernels can output BF16 directly, in which case we want to avoid explicitly doing the conversion after the attention op.

We'd better add a flag or something (it is not clear to me yet what exactly), false by default, so that it won't break other workflows.
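A minimal sketch of that kind of opt-in flag; names such as force_fp8_fmha_output and is_blackwell are assumptions for illustration, not the actual TRT-LLM API:

```python
from typing import Optional

import torch


def pick_forced_out_dtype(has_fp8_kv_cache: bool,
                          has_fp8_qdq: bool,
                          has_out_scale: bool,
                          force_fp8_fmha_output: bool = False,  # off by default
                          is_blackwell: bool = False) -> Optional[torch.dtype]:
    # On Blackwell the FP8 FMHA kernels can emit BF16 directly, so no forced
    # FP8 output (and no extra conversion afterwards) is needed.
    if is_blackwell or not force_fp8_fmha_output:
        return None
    if has_fp8_kv_cache and not has_fp8_qdq and has_out_scale:
        # Pre-Blackwell: force the FP8 FMHA output path for the eagle3 draft layers.
        return torch.float8_e4m3fn
    return None
```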
```python
mrope_config["mrope_position_deltas"] = mrope_position_deltas

# Be forced to use FP8 FMHA for BF16/FP16 model with FP8 KV cache (e.g. eagle3 + FP8 target model + BF16/FP16 draft model)
forced_to_fp8_fmha = not self.has_quant_scale and self.quant_config is not None and self.quant_config.layer_quant_mode.has_fp8_kv_cache()
```
Same as above. We can add the conversion kernel inside the attention op (https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/tensorrt_llm/common/attentionOp.cpp), so that if the output dtype is not supported on Hopper/Ampere (using fmha_v2), we can invoke the conversion kernel. Exposing the logic outside the attention op will complicate the design, as this is only needed by fmha_v2.
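For context, the explicit conversion being discussed is essentially a cast of an FP8 attention output back to the model dtype. A hedged sketch of what doing it outside the op looks like (attn_forward is a placeholder callable, not a real API); this comment suggests moving the equivalent step inside attentionOp so it only happens on the fmha_v2 path:

```python
import torch


def attention_with_external_cast(attn_forward, *args, model_dtype=torch.bfloat16, **kwargs):
    # attn_forward stands in for the attention call that may return FP8 output.
    out = attn_forward(*args, **kwargs)
    if out.dtype == torch.float8_e4m3fn:
        # Explicit post-attention cast done by the caller; keeping this outside the
        # op is what the reviewer flags as complicating the design.
        out = out.to(model_dtype)
    return out
```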
Hi @PerkzZheng, I have moved the logic to attentionOp and distinguished the behaviors of Blackwell and pre-Blackwell. The CI failure has been fixed locally. Could you please review it again? Thanks.
/bot run

PR_Github #21351 [ run ] triggered by Bot

PR_Github #21351 [ run ] completed with state
```cpp
// Run the fmha kernel.
mFmhaDispatcher->run(fmhaParams);
if (mFP8FmhaForEagle3 && !mFmhaDispatcher->useTllmGen() && !mFP8AttenOutput)
{
```
We'd better add some comments here to describe the logic.
```python
if mrope_position_deltas is not None:
    mrope_config["mrope_position_deltas"] = mrope_position_deltas

# Be forced to use FP8 FMHA for BF16/FP16 model with FP8 KV cache (e.g. eagle3 + FP8 target model + BF16/FP16 draft model)
```
This seems too specific (more like a WAR). @yuxianq, do you have any insights about this? Thanks.
I agree that it is too specific. The purpose of this PR is to add a way to explicitly control, from outside the attention op, whether we use FP8 FMHA. How about adding a force_fp8_fmha flag to attention (false by default) and only enabling it in the eagle3 case? We don't need to add new fields to the common AttentionOp.
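A rough sketch of that direction; the class and constructor below are simplified stand-ins for illustration, not the real TRT-LLM Attention module:

```python
class Attention:
    """Simplified stand-in to illustrate the proposed force_fp8_fmha knob."""

    def __init__(self, num_heads: int, head_dim: int, force_fp8_fmha: bool = False):
        self.num_heads = num_heads
        self.head_dim = head_dim
        # False by default, so existing (non-eagle3) workflows are unchanged.
        self.force_fp8_fmha = force_fp8_fmha


# Only the eagle3 draft layers would opt in:
draft_attention = Attention(num_heads=8, head_dim=128, force_fp8_fmha=True)
```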
That makes sense to me. Thanks!
Closing the PR since it's too specialized. The fix has been moved to #8910.
Description
Fix the eagle3 FP8-KV target model + BF16 draft model + chunked prefill combination by mandating the use of FP8 FMHA in the draft layers, because the attention of the second chunk's context phase needs to load the FP8 KV cache.
Test Coverage
A test of eagle3 + FP8 KV target model + BF16 draft model + chunked prefill is added:
tests/unittest/_torch/speculative/test_eagle3.py::test_llama_eagle3[True-TRTLLM-False-True-True-True-True-True-True]
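To run just the added case locally, something along these lines should work (assuming a working TensorRT-LLM development environment with the required GPUs; the parameter string must match the test's parametrization exactly):

```python
# Invoke the single parameterized case through pytest's Python entry point.
import pytest

pytest.main([
    "tests/unittest/_torch/speculative/test_eagle3.py::"
    "test_llama_eagle3[True-TRTLLM-False-True-True-True-True-True-True]",
    "-v",
])
```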
PR Checklist
Please review the following before submitting your PR:
- PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- Test cases are provided for new code paths (see test instructions).
- Any new dependencies have been scanned for license and vulnerabilities.
- CODEOWNERS updated if ownership changes.
- Documentation updated as needed.
- The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

run

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.